Muscle-actuated control is a research topic of interest spanning several fields, in particular biomechanics, robotics, and graphics. This type of control is especially challenging because the models are typically over-actuated and the dynamics are delayed and non-linear. It is, however, a very well-tested and well-tuned actuation model, refined over millions of years of evolution, with interesting properties such as exploiting the passive forces of muscle-tendon units and efficient energy storage and release. To facilitate research on muscle actuation, we release a 3D musculoskeletal simulation of an ostrich based on the MuJoCo simulator. Ostriches are among the fastest bipeds on Earth and are therefore an excellent model for studying muscle-actuated bipedal locomotion. The model is based on CT scans and dissections used to collect real muscle data such as insertion sites, lengths, and pennation angles. In addition to the model, we provide a set of reinforcement learning tasks, including reference motion tracking and a neck reaching task. The reference motion data are based on motion capture clips of various behaviours that we pre-processed and adapted to our model. This paper describes how the model was built and iteratively improved using these tasks. We evaluate the accuracy of the muscle actuation patterns by comparing them with electromyographic data collected experimentally from locomoting birds. We believe this work can serve as a useful bridge between the biomechanics, reinforcement learning, graphics, and robotics communities by providing a fast and easy-to-use simulation.
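As a taste of how such a simulation can be driven programmatically, the minimal sketch below loads a musculoskeletal model with the official MuJoCo Python bindings and steps it under random muscle activations. The file name `ostrich.xml` is a placeholder, not the actual asset released with this work.

```python
# Minimal sketch: load a musculoskeletal model with the official MuJoCo Python
# bindings and drive it with random muscle activations. "ostrich.xml" is a
# placeholder path, not the asset shipped with the paper.
import numpy as np
import mujoco

model = mujoco.MjModel.from_xml_path("ostrich.xml")  # hypothetical model file
data = mujoco.MjData(model)

for _ in range(1000):
    # Muscle actuators in MuJoCo take activations in [0, 1].
    data.ctrl[:] = np.random.uniform(0.0, 1.0, size=model.nu)
    mujoco.mj_step(model, data)

# Assuming a free-joint root, qpos[2] is the vertical position of the torso.
print("final torso height:", data.qpos[2])
```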
Graph neural networks (GNNs) have been shown to be highly sensitive to the choice of aggregation function. While summing over a node's neighbours can approximate any permutation-invariant function over discrete inputs, Cohen-Karlik et al. [2020] proved there are set-aggregation problems for which summing cannot generalise to unbounded inputs, proposing recurrent neural networks regularised towards permutation-invariance as a more expressive aggregator. We show that these results carry over to the graph domain: GNNs equipped with recurrent aggregators are competitive with state-of-the-art permutation-invariant aggregators, on both synthetic benchmarks and real-world problems. However, despite the benefits of recurrent aggregators, their $O(V)$ depth makes them both difficult to parallelise and harder to train on large graphs. Inspired by the observation that a well-behaved aggregator for a GNN is a commutative monoid over its latent space, we propose a framework for constructing learnable, commutative, associative binary operators. And with this, we construct an aggregator of $O(\log V)$ depth, yielding exponential improvements for both parallelism and dependency length while achieving performance competitive with recurrent aggregators. Based on our empirical observations, our proposed learnable commutative monoid (LCM) aggregator represents a favourable tradeoff between efficient and expressive aggregators.
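To make the $O(\log V)$ idea concrete, here is a minimal PyTorch sketch (not the authors' LCM architecture) that aggregates a set of node features with a learned binary operator applied as a balanced binary tree. Commutativity is obtained here only by symmetrising the operator's inputs and associativity is not enforced; the paper's commutative-monoid construction is more principled.

```python
# Sketch: balanced-tree aggregation with a learned binary operator, giving
# O(log n) depth over n inputs. Not the paper's exact construction.
import torch
import torch.nn as nn

class BinaryOp(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x, y):
        # Symmetrise the inputs so that op(x, y) == op(y, x) by construction.
        return 0.5 * (self.net(torch.cat([x, y], dim=-1)) +
                      self.net(torch.cat([y, x], dim=-1)))

def tree_aggregate(op: BinaryOp, feats: torch.Tensor, identity: torch.Tensor):
    """feats: (n, dim). Pads to a power of two with a (learned) identity element."""
    n, dim = feats.shape
    size = 1
    while size < n:
        size *= 2
    layer = torch.cat([feats, identity.expand(size - n, dim)], dim=0) if size > n else feats
    while layer.shape[0] > 1:
        layer = op(layer[0::2], layer[1::2])  # combine adjacent pairs in parallel
    return layer[0]

dim = 16
op = BinaryOp(dim)
identity = nn.Parameter(torch.zeros(dim))
out = tree_aggregate(op, torch.randn(7, dim), identity)
print(out.shape)  # torch.Size([16])
```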
Searching for a path between two nodes in a graph is one of the most well-studied and fundamental problems in computer science. In numerous domains such as robotics, AI, or biology, practitioners develop search heuristics to accelerate their pathfinding algorithms. However, it is a laborious and complex process to hand-design heuristics based on the problem and the structure of a given use case. Here we present PHIL (Path Heuristic with Imitation Learning), a novel neural architecture and a training algorithm for discovering graph search and navigation heuristics from data by leveraging recent advances in imitation learning and graph representation learning. At training time, we aggregate datasets of search trajectories and ground-truth shortest path distances, which we use to train a specialized graph neural network-based heuristic function using backpropagation through steps of the pathfinding process. Our heuristic function learns graph embeddings useful for inferring node distances, runs in constant time independent of graph sizes, and can be easily incorporated in an algorithm such as A* at test time. Experiments show that PHIL reduces the number of explored nodes compared to state-of-the-art methods on benchmark datasets by 58.5\% on average, can be directly applied in diverse graphs ranging from biological networks to road networks, and allows for fast planning in time-critical robotics domains.
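The sketch below shows how a learned heuristic such as PHIL could be dropped into a standard A* implementation at test time: the heuristic is simply a callable `heuristic(node, goal)`, here replaced by a trivial zero heuristic for illustration.

```python
# Sketch: A* search parameterised by an arbitrary heuristic callable, which
# could be a learned (e.g. GNN-based) distance estimator at test time.
import heapq

def a_star(neighbours, cost, start, goal, heuristic):
    """neighbours: node -> iterable of successors; cost: (u, v) -> edge cost."""
    frontier = [(heuristic(start, goal), 0.0, start, [start])]
    best_g = {start: 0.0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt in neighbours(node):
            new_g = g + cost(node, nxt)
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(frontier, (new_g + heuristic(nxt, goal), new_g, nxt, path + [nxt]))
    return None, float("inf")

# Toy example on a small explicit graph with a zero (placeholder) heuristic.
graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
path, dist = a_star(lambda n: graph[n], lambda u, v: 1.0, "a", "d", lambda n, g: 0.0)
print(path, dist)  # a shortest path of length 2.0 (via 'b' or 'c')
```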
Neural algorithmic reasoning studies the problem of learning algorithms with neural networks, especially with graph architectures. A recent proposal, XLVIN, reaps the benefits of using a graph neural network that simulates the value iteration algorithm in deep reinforcement learning agents. It allows model-free planning without access to privileged information about the environment, which is usually unavailable. However, XLVIN only supports discrete action spaces, and is hence nontrivially applicable to most tasks of real-world interest. We expand XLVIN to continuous action spaces by discretization, and evaluate several selective expansion policies to deal with the large planning graphs. Our proposal, CNAP, demonstrates how neural algorithmic reasoning can make a measurable impact in higher-dimensional continuous control settings, such as MuJoCo, bringing gains in low-data settings and outperforming model-free baselines.
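As an illustration of the discretisation step, the following sketch bins each continuous action dimension into a few evenly spaced values and enumerates their Cartesian product; the paper's actual discretisation and selective-expansion policies are not reproduced here.

```python
# Sketch: a simple per-dimension discretisation of a continuous action space.
# The resulting table maps a discrete action index back to a continuous vector.
import itertools
import numpy as np

def build_action_table(low, high, bins):
    """low, high: per-dimension bounds; returns a (bins**dim, dim) lookup table."""
    axes = [np.linspace(l, h, bins) for l, h in zip(low, high)]
    return np.array(list(itertools.product(*axes)))

table = build_action_table(low=[-1.0, -1.0], high=[1.0, 1.0], bins=3)
print(table.shape)  # (9, 2): 9 discrete actions covering a 2D continuous space
print(table[4])     # [0. 0.] -- the centre action
```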
Deploying graph neural networks (GNNs) on whole-graph classification or regression tasks is known to be challenging: it often requires computing node features that are mindful of both local interactions in their neighbourhood and the global context of the graph structure. GNN architectures that navigate this space need to avoid pathological behaviours, such as bottlenecks and oversquashing, while ideally having linear time and space complexity requirements. In this work, we propose an elegant approach based on propagating information over expander graphs. We leverage an efficient method for constructing expander graphs of a given size, and use this insight to propose the EGP model. We show that EGP is able to address all of the above concerns, while requiring minimal effort to set up, and provide evidence of its empirical utility on relevant graph classification datasets and baselines in the Open Graph Benchmark. Importantly, using expander graphs as a template for message passing necessarily gives rise to negative curvature. While this appears to be counterintuitive in light of recent related work on oversquashing, we theoretically demonstrate that negatively curved edges are likely to be required to obtain scalable message passing without bottlenecks. To the best of our knowledge, this is a previously unstudied result in the context of graph representation learning, and we believe our analysis paves the way to a novel class of scalable methods to counter oversquashing in GNNs.
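A toy sketch of the underlying idea follows: alternate mean-aggregation message passing between the input graph's edges and a second, fixed edge template. The `expander_edges` below are a placeholder and not a genuine expander construction, which is what EGP actually uses.

```python
# Sketch: alternating message passing over the input graph and a fixed
# template edge set (placeholder here; EGP uses proper expander graphs).
import torch

def mean_propagate(x, edge_index):
    """x: (n, d) node features; edge_index: (2, e) directed edges (src, dst)."""
    src, dst = edge_index
    out = torch.zeros_like(x)
    out.index_add_(0, dst, x[src])
    deg = torch.zeros(x.shape[0]).index_add_(0, dst, torch.ones_like(dst, dtype=x.dtype))
    return out / deg.clamp(min=1).unsqueeze(-1)

n, d = 6, 8
x = torch.randn(n, d)
graph_edges = torch.tensor([[0, 1, 2, 3, 4, 5], [1, 2, 3, 4, 5, 0]])
expander_edges = torch.tensor([[0, 1, 2, 3, 4, 5], [3, 4, 5, 0, 1, 2]])  # placeholder template
for edges in [graph_edges, expander_edges, graph_edges, expander_edges]:
    x = mean_propagate(x, edges)
print(x.shape)  # torch.Size([6, 8])
```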
A cornerstone of neural algorithmic reasoning is the ability to solve algorithmic tasks, especially in a way that generalises out of distribution. While methodological improvements in this area have surged in recent years, they have mostly focused on building specialist models: models capable of learning to execute only one algorithm, or a collection of algorithms sharing the same control-flow backbone. Here, instead, we focus on constructing a generalist neural algorithmic learner: a single graph neural network processor capable of learning to execute a wide range of algorithms, such as sorting, searching, dynamic programming, path-finding, and geometry. We leverage the CLRS benchmark to show empirically that, much like recent successes in the perception domain, generalist algorithmic learners can be built by "incorporating" knowledge. That is, algorithms can be learned effectively in a multi-task manner, so long as we can learn to execute them well in the single-task regime. Motivated by this, we present a series of improvements to the input representation, training regime, and processor architecture over CLRS, improving average single-task performance by more than 20%. We then conduct a thorough ablation of multi-task learners that leverage these improvements. Our results show that a generalist learner effectively incorporates the knowledge captured by specialist models.
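A highly simplified sketch of the "shared processor with per-task encoders and decoders" pattern described above is shown next; all shapes, the GRU-cell processor, and the round-robin task sampling are illustrative assumptions rather than the paper's actual setup.

```python
# Sketch: multi-task training of a shared processor with per-task encoders and
# decoders. Inputs and targets are random stand-ins for algorithmic traces.
import torch
import torch.nn as nn

tasks = ["sorting", "searching", "shortest_paths"]
hidden = 64
processor = nn.GRUCell(hidden, hidden)                       # shared across all tasks
encoders = nn.ModuleDict({t: nn.Linear(8, hidden) for t in tasks})
decoders = nn.ModuleDict({t: nn.Linear(hidden, 8) for t in tasks})
opt = torch.optim.Adam(
    list(processor.parameters()) + list(encoders.parameters()) + list(decoders.parameters()),
    lr=1e-3,
)

for step in range(300):
    task = tasks[step % len(tasks)]                          # round-robin task sampling
    x = torch.randn(32, 8)                                   # stand-in for task inputs
    y = torch.randn(32, 8)                                   # stand-in for task targets
    h = encoders[task](x)
    for _ in range(4):                                       # a few processor steps
        h = processor(h, h)
    loss = nn.functional.mse_loss(decoders[task](h), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```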
Incentives that compensate for the costs involved in the decentralised training of a federated learning (FL) model are a key stimulus for clients' long-term participation. However, it is hard to convince clients to participate in FL with high-quality contributions due to the lack of: (i) complete information about the quality and properties of client data; (ii) the value of each client's data contribution; and (iii) a trustworthy mechanism for monetary reward offers. This often leads to poor training and communication efficiency. While several works focus on strategic incentive design and client selection to overcome this problem, there remains a major knowledge gap in terms of an overall design tailored to the foreseen digital economy, including Web 3.0, that simultaneously meets the learning objectives. To address this gap, we propose a contribution-based tokenised incentive scheme, \texttt{FedToken}, backed by blockchain technology, which ensures a fair allocation of tokens among clients corresponding to the valuation of their data during model training. Leveraging an engineered Shapley-based scheme, we first approximate the contributions of local models during model aggregation, and then strategically schedule clients to lower the number of communication rounds needed for convergence and anchor the way \emph{affordable} tokens are allocated under a constrained monetary budget. Extensive simulations demonstrate the efficacy of our proposed approach.
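The sketch below illustrates the two building blocks in isolation: a Monte-Carlo approximation of Shapley-style client contributions and a proportional token allocation under a fixed budget. The coalition value function `v` is a toy placeholder; FedToken derives contributions from the model aggregation itself, which is not reproduced here.

```python
# Sketch: Monte-Carlo Shapley-style contribution estimates and proportional
# token allocation under a fixed budget. The value function is a toy stand-in.
import random

def approx_shapley(clients, v, samples=200):
    phi = {c: 0.0 for c in clients}
    for _ in range(samples):
        perm = random.sample(clients, len(clients))      # a random permutation
        coalition, prev = [], v([])
        for c in perm:
            coalition.append(c)
            cur = v(coalition)
            phi[c] += (cur - prev) / samples             # marginal contribution
            prev = cur
    return phi

def allocate_tokens(phi, budget):
    total = sum(max(p, 0.0) for p in phi.values()) or 1.0
    return {c: budget * max(p, 0.0) / total for c, p in phi.items()}

clients = ["c1", "c2", "c3"]
v = lambda coalition: len(coalition) ** 0.5              # toy coalition value
phi = approx_shapley(clients, v, samples=500)
print(allocate_tokens(phi, budget=100.0))
```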
In this work, we propose a framework for a portable access point (PAP), deployed on an unmanned aerial vehicle (UAV), to serve a set of ground nodes (GNs). In addition to the PAP and GNs, the system consists of a set of intelligent reflecting surfaces (IRSs) mounted on man-made structures to increase the number of bits transmitted per joule of energy consumed, measured as the global energy efficiency (GEE). A GEE trajectory for the PAP is designed by considering the UAV propulsion energy consumption and the Peukert effect of the PAP battery, which represents an accurate battery discharge profile as a non-linear function of the UAV power consumption profile. The GEE trajectory design problem is divided into two phases: in the first phase, the path and feasible positions of the PAP are found using a multi-tier circle packing method, and the required IRS phase-shift values are computed using an alternating optimisation method that accounts for the interdependence between the amplitude and phase responses of the IRS elements; in the second phase, the PAP flying velocity and user scheduling are computed using a novel multiple-lap trajectory design algorithm. Numerical evaluations show that: neglecting the Peukert effect overestimates the available flight time of the PAP; beyond a certain threshold, increasing the battery size reduces the available flight time of the PAP; and the presence of the IRS modules improves the GEE of the system compared to other baseline scenarios. The multiple-lap trajectory also saves more energy than a single-lap trajectory developed using a combination of successive convex programming and the Dinkelbach algorithm.
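To see why the Peukert effect matters, the small sketch below compares the flight time predicted by a linear discharge model with that of Peukert's law for the same battery and current draw; the constants are illustrative, and the paper's propulsion and battery models are more detailed.

```python
# Sketch: flight-time estimate under Peukert's law, t = H * (C / (I * H))**k,
# with H the rated discharge time, C the capacity, I the draw current, and
# k the Peukert constant. Constants below are illustrative only.
def peukert_flight_time(capacity_ah, rated_hours, current_a, k):
    return rated_hours * (capacity_ah / (current_a * rated_hours)) ** k

# Linear (k = 1) vs. Peukert (k > 1) discharge at a high draw current.
print(peukert_flight_time(10.0, 1.0, 20.0, 1.0))   # 0.5 h under a linear model
print(peukert_flight_time(10.0, 1.0, 20.0, 1.3))   # ~0.41 h, shorter under Peukert
```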
In this work, we study the trade-off between the reliability and the investment cost of an unmanned aerial system (UAS) consisting of a set of unmanned aerial vehicles (UAVs) carrying radio nodes, called portable access points (PAPs), deployed to serve a set of ground nodes (GNs). Using the proposed algorithm, a given geographical area is equivalently represented as a set of circular regions, where each circle represents the coverage area of a PAP. The steady-state availability of the UAS is then derived analytically by modelling it as a continuous-time birth-death Markov decision process (MDP). Numerical evaluations show that the investment cost required to guarantee a given steady-state availability can be reduced by taking the traffic demand and distribution of the GNs into account.
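As a simplified stand-in for the availability analysis, the sketch below computes the steady-state distribution of a plain continuous-time birth-death chain (without the decision component) from its birth and death rates, using the standard product-form solution; the mapping from chain states to "system available" states is an illustrative assumption.

```python
# Sketch: steady-state distribution of a continuous-time birth-death chain via
# the product-form solution pi_{i+1} = pi_i * lambda_i / mu_i, then normalise.
def birth_death_steady_state(birth, death):
    """birth[i]: rate i -> i+1; death[i]: rate i+1 -> i. Returns pi over states 0..n."""
    weights = [1.0]
    for lam, mu in zip(birth, death):
        weights.append(weights[-1] * lam / mu)
    z = sum(weights)
    return [w / z for w in weights]

# Example: 3 PAPs, each failing at rate 0.1 and repaired at rate 1.0; the state
# counts failed PAPs, so the birth (failure) rate shrinks as more PAPs fail.
pi = birth_death_steady_state(birth=[0.3, 0.2, 0.1], death=[1.0, 1.0, 1.0])
availability = sum(pi[:3])   # assume the system is "available" with at most 2 failures
print(pi, availability)
```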
In this work, we optimise the 3D trajectory of an unmanned aerial vehicle (UAV)-based portable access point (PAP) that provides wireless services to a set of ground nodes (GNs). Moreover, in line with the Peukert effect, we consider a practical non-linear discharge profile for the UAV battery. We thus formulate the problem in a novel manner as the maximisation of a fairness-based energy efficiency metric, termed fair energy efficiency (FEE). The FEE metric defines a system that places importance on both per-user service fairness and the energy efficiency of the PAP. The formulated problem is non-convex with intractable constraints. To obtain a solution, we represent the problem as a Markov decision process (MDP) with continuous state and action spaces. Considering the complexity of the solution space, we use the twin delayed deep deterministic policy gradient (TD3) actor-critic deep reinforcement learning (DRL) framework to learn a policy that maximises the FEE of the system. We perform two types of RL training to show the effectiveness of our approach: the first (offline) approach keeps the positions of the GNs fixed throughout the training phase, while the second approach generalises the learned policy to any arrangement of GNs by changing the GN positions after every training episode. Numerical evaluations show that neglecting the Peukert effect overestimates the air time of the PAP, which can be addressed by optimally selecting the PAP's flying speed. Moreover, the user fairness, the energy efficiency, and hence the FEE of the system can be improved by moving the PAP efficiently above the GNs. In particular, we observe gains of up to 88.31%, 272.34%, and 318.13% over baseline scenarios for suburban, urban, and dense urban environments, respectively.
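Since the abstract does not spell out the FEE metric, the sketch below uses one plausible fairness-times-efficiency form (Jain's fairness index multiplied by bits per joule) purely for illustration; the paper's exact definition may differ.

```python
# Sketch: an assumed fairness-weighted energy-efficiency metric, combining
# Jain's fairness index over per-user delivered bits with overall bits/joule.
def jain_fairness(values):
    return sum(values) ** 2 / (len(values) * sum(v ** 2 for v in values))

def fair_energy_efficiency(user_bits, energy_joules):
    return jain_fairness(user_bits) * sum(user_bits) / energy_joules

print(fair_energy_efficiency([1e6, 1e6, 1e6], 500.0))          # perfectly fair service
print(fair_energy_efficiency([2.9e6, 0.05e6, 0.05e6], 500.0))  # same total bits, unfair
```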